Patent abstract:
System and method for tracking objects under occlusion. A method for tracking objects in a scene may include receiving visual-based information from the scene with a vision-based tracking system and scene-based telemetry information with an RTLS-based tracking system. The method can also include determining a location and identity of a first object in the scene using a combination of visual-based and telemetry-based information. Another method for tracking objects in a scene may include detecting a location and identity of a first object and determining a telemetry-based measurement between the first object and a second object using a real-time location system (RTLS). The method may include determining the location and identity of the second object based on the detected location of the first object and the determined measurement. A system for tracking objects in a scene can include telemetry-based and visual-based information receivers and an object tracker.
Publication number: BR112012013525B1
Application number: R112012013525-2
Filing date: 2010-12-09
Publication date: 2021-03-30
Inventor: Smader Gefen
Applicant: Disney Enterprises, Inc.
IPC main class:
Patent description:

FIELD OF THE INVENTION
[001] Embodiments of the present invention relate to computer vision, image processing, and Real-Time Location Systems (RTLS).
BACKGROUND
[002] Computer vision-based methods for tracking multiple objects rely on distinctive appearance and known models of object movement to continuously locate and identify them in the scene. Generally, the fidelity of positional data generated by vision-based methods is high. However, when the objects being tracked move in clusters, the tracking process is complicated. Many techniques face the challenge of tracking objects subjected to occlusion. Algorithms that try to solve this problem are generally successful, provided that the object has a distinctive appearance and its movement is consistent. However, in practice, objects can have a similar appearance and their movement under occlusion can be unpredictable. Such situations occur in team sports (football, basketball, etc.), where players tend to crowd into clusters.
[003] An alternative to vision-based tracking systems is RTLS technology. An RTLS is a location technology designed to detect and track people as well as goods. It includes tags (transmitter/responder components) attached to the dynamic objects, handheld or fixed readers (transceiver components), and a server application. RTLS methodology encompasses several technologies, including infrared, sound, ultrasound, Wi-Fi, radio-frequency identification (RFID), ultra-wideband, GPS, and cellular. Each technology is best suited for a given application depending on parameters such as power requirements, range, indoor versus outdoor applicability, spatial accuracy (granularity), latency, and data rate.
[004] Essential to RTLS is the nature of the transponder (tag). Tracked objects can be tagged with passive, semi-passive, or active tags. A passive transponder does not have a battery and therefore does not initiate communication. It is energized by the received signal and responds by reflecting this signal using a technique called backscatter. The reflected signal is modulated with the data stored in the tag's memory. The range of a passive tag (more than 100 meters) is a function of the signal strength of the reader and its antenna. It is physically small and light, with no processing capacity, and therefore inexpensive. Like passive transponders, semi-passive transponders do not initiate communication, but use a backscatter technique to respond to the received signal. However, they have their own batteries (up to 10 years of life) that are used mainly to power on-board sensors, for temperature and motion measurement, for example, or to increase the range of operation. Their range and physical size are comparable to those of a passive transponder. Active transponders, on the other hand, are equipped with a battery (up to 5 years of useful life) that is used to power their circuitry and generate signal transmissions. Therefore, active transponders can initiate communication periodically or when triggered by an attached sensor. However, frequent transmissions consume more energy and shorten battery life. Depending on the type of battery, an active transponder is physically larger than a passive transponder, its range can reach up to several hundred meters, and its processing capacity is better if it includes a computer chip.
[005] Although RTLS is a promising and emerging technique, it suffers from physical difficulties, including multipath fading, signal attenuation, limited data rate, latency, and, most importantly, the need for multiple lines of sight (LOS). In some applications, an RTLS is required to identify and locate objects with high fidelity and accuracy. For example, in tracking players in a team sport, the positional data of players during a live broadcast game is instrumental for real-time annotation (telestration) and for computing player performance statistics. Since most of the time players move quickly and in close proximity to each other, meaningful tracking performance requires accuracy better than one foot.
[006] What is needed are methods that use vision-based technology and RTLS technology to track players in a game who are occluded in a cluster.
BRIEF DESCRIPTION OF THE DRAWINGS
[007] Embodiments of the invention are described with reference to the accompanying drawings. In the drawings, like reference numbers may indicate identical or functionally similar elements. The drawing in which an element first appears is indicated by the leftmost digit of the corresponding reference number.
[008] Figures 1A-1B show top-level block diagrams of the proposed tracking system according to an embodiment.
[009] Figure 2 shows a diagram of a trilateration positioning method in the two-dimensional case, according to an embodiment.
[010] Figure 3 shows a diagram that demonstrates the use of a line of sight (LOS) to resolve errors in identifying objects, according to an embodiment.
[011] Figure 4 illustrates the positioning of occluded objects in a cluster, according to an embodiment.
[012] Figure 5 illustrates the positioning of occluded objects in a football game, according to an embodiment.
[013] Figure 6 illustrates posture detection and tracking using transponders attached to a set of objects, according to an embodiment.
[014] Figure 7 illustrates a block diagram of a synergistic tracking system according to an embodiment.
[015] Figure 8 illustrates an example of a real-time location system (RTLS) reader according to an embodiment.
[016] Figure 9 illustrates an exemplary RTLS tag according to an embodiment.
[017] Figure 10 illustrates an exemplary RTLS timing diagram according to an embodiment.
[018] Figure 11 illustrates an example of a computing device that can be used in embodiments of the invention.
DETAILED DESCRIPTION
[019] Methods and systems for tracking objects during an event are provided. While the present invention is described herein with reference to illustrative embodiments for particular applications, it should be understood that the invention is not limited to them. Those skilled in the art with access to the teachings provided herein will recognize additional modifications, applications, and embodiments within its scope and additional fields in which the invention would be of significant use.
[020] The embodiments described here refer to the use of RTLS technology in combination with vision-based technology to track and identify multiple objects in real time. Embodiments can also refer to a system in which vision-based technology and RTLS are combined to increase the fidelity of the tracking system as a whole. Further embodiments describe a method for continually maintaining the tracking and identification of multiple objects, possibly with similar appearances and subject to persistent occlusion, using as few as one camera and one reader.
[021] The tracking systems and methods described in embodiments of the present invention are described below in the context of tracking players, referees, support staff, and related objects (puck, ball, etc.) in a team sport. A person skilled in the art will appreciate that the systems and methods of this invention can be applied to a wide range of events including, but not limited to, any sporting event, as well as non-sporting applications that require tracking and/or identification of one or more objects in a scene.
[022] Both vision-based tracking systems and RTLS-based tracking systems, when used independently, suffer technological limitations that stand in the way of achieving object positioning with acceptable performance. Aspects of this invention present methods whereby the indeterminate tracking results provided independently by each technology are merged into a complete tracking solution.
[023] The current state of the art in RTLS allows the identification and location of dynamic objects at range with an accuracy between 0.3 and 3 meters. Nevertheless, in the presence of phenomena such as multipath fading, attenuation, occlusion, etc., the position data is not deterministic; rather, it carries an uncertain spatial region represented by a Probability Density Function (PDF). Depending on the number of lines of sight (LOS) available, this PDF can peak within a small neighborhood of the object's location or can spread across a wide region. Used alone, an RTLS-based tracking system may not be able to provide a complete solution.
[024] Vision-based systems may include those described in U.S. Patent Application No. 12/403,857 by Gefen, incorporated herein by reference in its entirety. A challenge for vision-based systems is resolving the identity of the objects being tracked when their appearances are similar. This problem can be especially acute when objects split from a cluster. One aspect of an embodiment of this invention combines accurate vision-based object positional data with RTLS-based object identification data to re-tag objects as they emerge from occlusion or split from a cluster.
[025] Another challenge is to accurately track and position objects within a cluster, especially when the objects move randomly in close proximity. According to some embodiments, the position of objects that are occluded relative to the camera and the reader is resolved as follows. A vision-based tracking system detects isolated objects (objects that are not in close proximity to each other) with relatively high fidelity. Hence, when an isolated object can “see” an occluded object (has a LOS to it), the distance between them can be measured. In such a configuration, the transponders attached to the objects in the scene continuously send signals to the reader and to each other. These signals carry telemetry that is later processed to derive the distances between the reader and the transponders as well as the distances between the transponders themselves. In this embodiment, at least one camera and at least one reader use the LOS between the isolated objects and the occluded objects in the scene to position those occluded objects that would otherwise be invisible.
[026] In some cases, the location of a tagged object can be derived based on ranging techniques and position estimation techniques. Ranging techniques use telemetry, such as Time of Arrival (TOA), to derive distances between readers and transponders. TOA is the time it takes a signal to travel from a reader to a transponder and/or from a transponder to a reader. Given the speed of signal propagation, the distance can be computed. Note that to achieve meaningful distance estimates, the transponder and the reader must be exactly synchronized. Another known telemetry is the Angle of Arrival (AOA). AOA is the angle between the direction of signal propagation and a reference axis. AOA requires an additional (array) antenna, and its accuracy is highly range-dependent (for a long range, a small error in the angle measurement results in a large error in the position estimate). The Received Signal Strength Indicator (RSSI) is yet another common telemetry. It measures the attenuation of the received signal to derive the distance. However, attenuation can be affected by factors including multipath fading, temperature, humidity, and occluding objects. Other telemetries known in the art include Time Difference of Arrival (TDOA), Time of Flight (TOF), and Round-Trip Time (RTT). The accuracy of the distances computed from these telemetries is limited by the level of the technology (clock and synchronization resolution, or precision of the antenna array) and the conditions in the scene (humidity or obstacles).
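As a rough illustration of the ranging arithmetic described above (a sketch of the editor's, not part of the patent; the function names and noise-free assumption are invented), a TOA or RTT measurement converts to distance through the signal propagation speed:

```python
# Illustrative TOA/RTT ranging; assumes perfectly synchronized clocks and
# noise-free measurements, which the text notes is not the case in practice.
C = 299_792_458.0  # signal propagation speed (speed of light), in m/s

def distance_from_toa(toa_seconds: float) -> float:
    """One-way Time of Arrival: distance = speed * time."""
    return C * toa_seconds

def distance_from_rtt(rtt_seconds: float) -> float:
    """Round-Trip Time covers the path twice, so halve it."""
    return C * rtt_seconds / 2.0
```

A one-way flight time of 100 ns corresponds to roughly 30 m; a clock error of only 1 ns already shifts the estimate by about 0.3 m, consistent with the synchronization requirement noted above.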
[027] Position estimation techniques can include trilateration and triangulation. In the three-dimensional case, a trilateration technique estimates the coordinates of a point A in space using at least: 1) four given points in space with known coordinates and 2) the distances from these four points to point A. Note that these four given distances define spheres centered at the four given points, and that their intersection uniquely defines point A. Similarly, a triangulation technique estimates the coordinates of a point A in space using at least: 1) three given points in space with known coordinates and 2) the angles between the lines connecting these points to point A and a reference line. Note that these three given angles define cones centered at the three given points, and that their intersection uniquely defines point A. Therefore, in the general case, a trilateration positioning technique requires at least four lines of sight (LOS) between transmitter and receiver and a synchronization system with a very accurate clock, while a triangulation positioning technique requires at least three LOS and a directional antenna with sufficient array elements to satisfy the required angular resolution. If more than the minimum required LOS are available, a least-squares error estimation method can be applied to minimize the position estimation error.
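The two-dimensional analogue of the trilateration described above can be sketched as follows (an illustrative example, not taken from the patent): subtracting the first circle equation from the others linearizes the system, which three anchors with known coordinates then determine uniquely.

```python
def trilaterate_2d(anchors, dists):
    """Estimate (x, y) from three anchor points with known coordinates and
    measured distances, by linearizing the three circle equations."""
    (x0, y0), (x1, y1), (x2, y2) = anchors
    d0, d1, d2 = dists
    # Subtracting the first circle equation from the others yields a 2x2 linear system.
    a11, a12 = 2 * (x1 - x0), 2 * (y1 - y0)
    a21, a22 = 2 * (x2 - x0), 2 * (y2 - y0)
    b1 = d0**2 - d1**2 + x1**2 - x0**2 + y1**2 - y0**2
    b2 = d0**2 - d2**2 + x2**2 - x0**2 + y2**2 - y0**2
    det = a11 * a22 - a12 * a21  # nonzero when the anchors are not collinear
    x = (b1 * a22 - b2 * a12) / det
    y = (a11 * b2 - a21 * b1) / det
    return x, y
```

With more than three anchors the same linearized system becomes overdetermined and can be solved by least squares, as the paragraph above notes.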
[028] An exemplary synergistic system 100 for object tracking, such as tracking multiple objects, is shown in figure 1A, according to an embodiment. One or more cameras 110 are used to cover a dynamic scene 120 of multiple objects that move rapidly around one another, often in clusters. Note that these objects may have similar appearances - particularly in a team game, where the objects belonging to the same team (offense, defense, or referees) exhibit a similar appearance (team uniforms). In some embodiments, the cameras can be statically positioned to cover the scene or can be dynamically translated and oriented to cover the center of activity. In addition to the cameras, one or more readers 130 can be positioned in the scene. Similar to the cameras, the locations of the readers can be static or dynamic to allow good reception of the transponder (tagged object) signal transmissions. The video and telemetry signals from the readers can be fed into the object tracking system 140. According to one embodiment, the object tracking system 140 can use vision-based techniques to continuously locate objects in the scene, including handheld readers in view. In addition, according to some embodiments, RTLS technology can be used to resolve tracking under occlusion, as will be explained in detail below. The raw positional data generated by the system 140 can be processed locally or sent to a third party for processing. A GUI application 150 can provide system control functionality, data visualization, and presentation of statistics that may be available to an operator.
[029] Both vision-based localization and RTLS-based localization may require lines of sight (LOS) to the object, according to some embodiments. While in the case of video it is possible to pinpoint the location of, and track, an object through processing applied to the pixels of a segment of the object's image, RTLS may require more than one LOS to locate a tagged object. In an unconstrained three-dimensional case, three LOS are required when using a triangulation positioning technique and four LOS when using a trilateration positioning technique. In the presence of 1) too few LOS, due to occlusion or failure of signal reception, or 2) inherent error in the telemetry data, the location of the tagged object may come with an uncertainty region represented by a spatial probability function - a Probability Density Function (PDF) - formulated either in image-space coordinates or in world-space coordinates. This PDF can be used as prior information in a probabilistic object tracking scheme, such as a common particle-filtering tracking method.
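To make the last point concrete, here is a hypothetical sketch (not from the patent; the Gaussian noise model and all names are assumptions) of using a range-derived PDF as prior information in a particle filter: each particle is reweighted by the likelihood of the measured reader-to-tag distance, so particles near the annulus of radius equal to the measured range dominate after normalization.

```python
import math

def range_likelihood(particle, reader, measured_dist, sigma=0.5):
    """Gaussian annulus PDF: how well a 2-D particle position explains one
    reader-to-tag range measurement (sigma is the range noise, in meters)."""
    r = math.hypot(particle[0] - reader[0], particle[1] - reader[1])
    return math.exp(-0.5 * ((r - measured_dist) / sigma) ** 2)

def reweight(particles, weights, reader, measured_dist, sigma=0.5):
    """One particle-filter update step using the RTLS range as prior."""
    w = [wi * range_likelihood(p, reader, measured_dist, sigma)
         for p, wi in zip(particles, weights)]
    total = sum(w)
    return [wi / total for wi in w]  # normalize so weights sum to 1
```

A vision-based likelihood term can multiply in the same way, which is one simple reading of how the two information sources merge.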
[030] Figure 1B illustrates an embodiment 102 of an object tracking system. In this case, the object tracking system 140 may include a visual-based information receiver 170 for receiving visual information, such as video signals or pixel data, collected by the visual system 110. Such a system may include one or more cameras surrounding the scene. The object tracking system 140 may include a telemetry-based information receiver 160 for receiving telemetry-based information, such as TOA or AOA data, from an RTLS reader 130. Object tracker 180 can be configured to use visual-based information and telemetry-based information to locate and identify an object in the scene. Whenever visual-based information is unable to locate and identify the object in the scene, object tracker 180 can use telemetry-based information measured from one or more objects in the scene to locate and identify the object. Whenever telemetry-based information is unable to locate and identify the object in the scene, object tracker 180 can use visual-based information about the object to locate and identify it. In addition, the combination of telemetry-based information and visual-based information can be used to uniquely determine the location and identity of an object when neither telemetry-based information nor visual-based information, used independently, is sufficient to uniquely determine the location and identity of the object. This can be done for multiple objects in an event or scene. System 102 can precisely locate and identify multiple objects, including objects that are occluded, recently occluded, or in close proximity to other objects in a cluster.
[031] Exemplary systems 100 and 102, or any part of systems 100 and 102, can be part of, or can be run by, one or more computing devices. A computing device can be any type of computing device having one or more processors. For example, a computing device can be a workstation, a mobile device (a cell phone, a personal digital assistant, or a laptop, for example), a computer, a server, a computer cluster, a server farm, a game console, a set-top box, a kiosk, an embedded system, or any other device having at least one processor and memory. Embodiments of the present invention can be software executed by a processor, firmware, hardware, or any combination of them in a computing device. According to one embodiment, object tracking system 140 can be implemented at various locations along the distribution path.
[032] Figure 2 shows three scenarios of RTLS-based localization in the two-dimensional case, according to embodiments of the invention. In each scenario there are three readers (or transponders attached to players), 210, 212, and 214, within range of a transponder attached to a player 216. In the first case 200, all three readers have a LOS to the transponder. Therefore, three distance measurements between the transponder and each reader are available. Based on these distances, the trilateration technique results in an estimate of the location of player 216. This estimated location is represented by a PDF centered at the intersection of three spatial circles 218. The extent of this uncertainty region (or the standard deviation of the PDF) may be on the order of magnitude of a few feet, depending on the fidelity and accuracy of the specific RTLS system technology.
[033] In the second case 202, the tagged player 216 is occluded by another player. As a result, only two readers, 210 and 214, have a LOS to the transponder, so only two distance measurements are available. Therefore, one can expect the player to be in one of the two intersection regions, 220 or 222. In the third case 204, the tagged player 216 is occluded by two other players. As a result, only one reader, 210, has a LOS to the transponder, so that only one distance measurement is available. Based on this distance measurement, the player is expected to be anywhere along the spatial circle centered at the reader's location (the width of the circle corresponds to the error inherent in the given telemetry). This spatial uncertainty about the player's whereabouts can be resolved by merging vision-based tracking information with RTLS-based information, according to embodiments of this invention.
[034] A method according to an embodiment of this invention can determine the identity of objects subjected to occlusion. For example, figure 3 shows how a common object identification error can be solved by supplementing a vision-based tracking method with an RTLS including one reader. Assuming that player 314 and player 316 belong to the same team, techniques based on pattern recognition will be limited because of the similar overall appearance of the two players. Assuming also that at time t0 both players start moving along paths on which both are correctly tracked and tagged by the tracking system, then as soon as they cross each other's path their projected images will merge, separating again at time t1. After separation, vision-based methods can provide the exact positions of the players, but their identities may be incorrectly swapped due to the similar appearance of the two players. On the other hand, an RTLS reader 310 provides the likely positions of the two players along circles 318 and 320, together with their identities. By merging the vision-based data (exact location of each player) with the telemetry-based data (identity of each player), the object tracking system 140 can resolve the location and identity of each player after separation.
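The fusion step just described can be sketched roughly as follows (an illustrative example with invented names and a greedy matching strategy of the editor's choosing, not prescribed by the patent): the precise positions come from vision, while each tag's measured range to the reader indicates which identity belongs to which position.

```python
import math

def assign_identities(positions, reader_pos, tag_ranges):
    """positions: vision-based (x, y) per detected player.
    tag_ranges: {tag_id: measured distance to the reader}.
    Returns {tag_id: position}, greedily matching each vision position
    to the tag whose measured range best fits it."""
    remaining = dict(tag_ranges)
    out = {}
    for pos in positions:
        r = math.hypot(pos[0] - reader_pos[0], pos[1] - reader_pos[1])
        best = min(remaining, key=lambda t: abs(remaining[t] - r))
        out[best] = pos
        del remaining[best]
    return out
```

A production system would likely use an optimal assignment (e.g. Hungarian method) rather than this greedy match, but the principle of combining vision positions with range-derived identities is the same.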
[035] Another aspect of this invention allows the location and identification of objects when they are occluded relative to the camera and the reader, as shown in figure 4. In this example embodiment, a camera 410 and a reader 414 cover scene 412. Typically, the scene includes players 420, 430 who move in very close proximity to each other. These players are separable and can be located precisely by a vision-based tracking method. The challenge lies in precisely locating the players who are positioned in a cluster 418, and who are therefore each occluded relative to camera 410 and reader 414. As pictured in figure 4, the camera and reader have LOS 416 to players 420-430, and as a result, the position and identity of these players are known. On the other hand, player 432 is occluded within the cluster 418, and therefore its projected image may be inseparable from the projected image of the whole cluster and, therefore, unrecognizable. However, there is a LOS between this player 432 and players 420-426. Since the positions of these players are known (either by RTLS or by vision-based methods), the position of the occluded player 432 can be recovered using a trilateration technique, for example. Note that in this configuration the telemetry measured at each transponder (such as TOA) relates to both 1) the distance between this transponder and the reader and 2) the distances between this transponder and other transponders with a LOS to it. In the event that a transponder does not have a LOS to the reader, it will transmit its data to the reader through another transponder.
[036] In such a multiple-access communication system, signals from transponders and readers can collide and cancel each other out, leading to inefficient use of bandwidth due to the need for repeated transmissions. Known anti-collision algorithms can be used, according to embodiments; these are designed to coordinate simultaneous communications through multiple-access protocols so that the total identification time as well as the energy consumption of the transponders is minimized. Communication signal arbitration techniques are already commonly used in satellite and cellular phone networks. Procedures such as space-division multiple access (SDMA), frequency-division multiple access (FDMA), time-division multiple access (TDMA), and code-division multiple access (CDMA) are known in the art for handling transceiver-to-transceiver interference.
[037] Embodiments of this invention can be applied to identification and tracking in a football game, as shown in figure 5. A football game is a challenging scene to analyze. It consists of short fragments (plays), each starting with the offensive and defensive players positioned in a certain formation. At the start of a play, the goal of the offensive team is to advance the ball toward the end zone, while the goal of the defending team is to oppose this advance. During the short duration of a play (a few seconds) the players move quickly, huddle together, and physically block each other. In this case, it is desirable to be able to identify and locate the players as they position themselves in formation along the line of scrimmage (or as they line up for a kickoff), and to keep tracking them throughout the duration of the play.
[038] According to an embodiment, this can be accomplished with a system comprising a camera 510 and a reader 514 (each positioned at any advantageous vantage point) and transponders attached to the players, the referees, or any support personnel in the vicinity. For example, similarly to the scenario shown in figure 4, the quarterback (QB) may be occluded by the linemen 518, but he may have a LOS to the fullback (FB), the halfback (HB), the wide receiver (WR), or any other player, referee, or support personnel in range. Therefore, according to this invention, the position of the occluded QB can be recovered using his distances to the FB, HB, WR, and/or other players with known positions. Another example is resolving the IDs and positions of the linemen 518. The linemen are positioned in close proximity along the line of scrimmage 516 and therefore may not be separable and/or identifiable by the vision-based method alone. However, their formation along the line of scrimmage (as prescribed by the rules of the game) provides a constraint that, along with the PDFs derived by a reader, is sufficient to determine the locations of these players. For example, six linemen result in six circle-shaped PDFs, each centered at the position of the reader and with a radius equal to the distance between the reader and the corresponding player. The intersections between these circles and the linemen's formation yield the position of each of the players. The linemen's formation, in turn, can be detected by vision-based methods.
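The geometric constraint used for the linemen can be sketched as a circle-line intersection (a hypothetical example; the coordinate convention and function names are assumptions): the reader's range measurement defines a circle, the line of scrimmage defines a line, and their intersection pins down candidate positions.

```python
import math

def positions_on_scrimmage(reader, dist, scrimmage_x):
    """Intersect the circle of radius `dist` centered at the reader with the
    vertical line x = scrimmage_x (the line of scrimmage).
    Returns 0, 1, or 2 candidate positions; any remaining two-way ambiguity
    can be resolved by the known formation."""
    dx = scrimmage_x - reader[0]
    disc = dist**2 - dx**2
    if disc < 0:
        return []  # the range circle never reaches the line
    dy = math.sqrt(disc)
    return [(scrimmage_x, reader[1] + dy), (scrimmage_x, reader[1] - dy)]
```

Repeating this for each of the six linemen's range measurements and ordering the results along the line reproduces, in miniature, the constraint-based positioning described above.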
[039] These methods, as applied to the identification and tracking of football players in an embodiment described here, present an opportunity for higher-level analyses of parameters such as team formation, play classification, etc. These methods may also allow monitoring of each of the 11 active players on each team and may allow indicating which of the 53 available players on each team are now playing on the field. This can be done either by tracking the total of 106 football players continuously, or by enabling tracking only for those players who are on the field while disabling the tracker for players who are currently off the field. In some cases, there may be a situation in which not all players are tagged. In this case, the identification and tracking of the untagged players can be done by vision-based methods and by methods in which the game formation and rules are used to deduce the likely position and identity of the untagged players.
[040] The embodiment method described above is especially useful in tracking small objects, such as the ball in a basketball game, which is often occluded by the players and is difficult to detect and track using vision alone. It is also applicable to resolving self-occlusion when the posture of an articulated object is detected and tracked, as shown in figure 6.
[041] Figure 6 shows an example embodiment where the movement of an articulated object - a baseball batter, for example - is captured by tracking the object's joints. In this case, self-occlusion complicates vision-based tracking. Here, too, RTLS technology can be used to supplement the deficiencies of vision-based technology. In figure 6, transponders 602-610 are attached to the joints of a baseball batter to allow posture detection and tracking. While the extremities of a human (the top of the head 602 and the ankles 604-606, for example) have LOS 622 to the camera 620 and the reader 630, and are therefore relatively simple to detect and track (by head-detection methods known in the art, for example), other locations (such as the knees, elbows, and shoulders 608-610) can be 1) self-occluded relative to the camera and reader or 2) difficult to extract with vision-based techniques. In this case, for example, each transponder 602, 604, and 606 measures the telemetry relative to all other transponders that have a LOS to it, 608-610, and transmits this data to the reader along with the telemetry related to its own distance to the reader. In this embodiment, an RTLS technology best suited for short range and fine granularity can be used for communication among the transponders 602-610, while another RTLS technology better suited for long-range telemetry can be used for communication between transponders 604-606 and the readers.
[042] Figure 7 shows a mobile system for dynamically tracking objects according to an embodiment. In this system at least one camera 718 is configured to cover the scene; this can be a stationary or non-stationary camera, such as a broadcast camera. The camera's video frames are fed into the object tracking system 710 for processing, one frame at a time. In addition, an RTLS subsystem 712 is integrated within the system, including at least one reader 714 and a combination of passive, semi-passive, and/or active transponders 716. Transponders may have on-board sensors attached to them to measure variables including temperature, motion, and impact energy. Both the video camera and the RTLS subsystem communicate with the object tracking system 710, either locally or remotely, wired or wirelessly, or using any other means of communication. The object tracking system 710 receives data from the RTLS subsystem 712 and camera 718, and also manages and controls them.
[043] Depending on the camera, calibration 730 can be loaded once at system reset time (static camera), or it can be done on the fly during system operation (dynamic camera), according to some embodiments. Known calibration methods estimate the camera model by, for example, matching landmarks in the scene to a real-world scene model. Given the camera model, one can map image-space coordinates to real-world coordinates and vice versa.
[044] The derivation of each object's projected image - referred to as object measurements - through processing of the current, and possibly previous, video frames is done at step 750. Known methods for background subtraction generate a mask that outlines the foreground regions in the current video frame. These blobs (foreground regions) segment out the projected images of the players (moving objects) or of clusters of players. Hence, a measurement may include information derived from the pixels belonging to the projected image of the object or objects. In the case where an object is isolated (a foreground region contains the image of a single object), accurate modeling and positioning of the object on the ground can be derived. At the other extreme, it is a challenge to model and position an object when it is merged within a cluster of other objects. Thus, the positional data of isolated objects (along with other related data such as time stamp and speed) sent to process 740 can be used in the process of identifying and locating tagged objects, as shown in figure 4. This will be explained below.
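A minimal sketch of the background-subtraction step (illustrative only; a production system would maintain a statistical background model rather than a single reference frame): each pixel sufficiently different from the background model is marked foreground, and connected foreground pixels form the blobs mentioned above.

```python
def foreground_mask(frame, background, threshold=25):
    """Boolean mask of foreground pixels, for frames given as
    2-D lists of grayscale intensity values (0-255)."""
    return [[abs(p - b) > threshold for p, b in zip(f_row, b_row)]
            for f_row, b_row in zip(frame, background)]
```

Grouping adjacent True pixels into connected components then yields the foreground blobs from which object measurements are derived.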
[045] Process 720 resets and controls the RTLS subsystem 712. This process also collects the telemetry measured by the RTLS subsystem. Depending on the RTLS technology in use, the telemetry can be a Time of Arrival (TOA), an Angle of Arrival (AOA), a Received Signal Strength Indicator (RSSI), etc. The telemetry provided by the RTLS subsystem can be generated periodically or on demand.
[046] Next, in step 740, the distances between the reader 714 and the transponders 716, and the distances between the transponders (except where there is no LOS), are calculated. Then, the location of each identified tagged player is derived using 1) all available distances between this tagged player and other tagged players/readers and 2) the positions of the other tagged players/readers as given by step 750. As mentioned above, the RTLS-based position estimate can be represented by a spatial probability density function (PDF). The more readers/transponders with known locations and with distances to the given object are available to the positioning method, the lower the entropy of the corresponding PDF (meaning a smaller uncertainty region).
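As an illustrative sketch of the positioning step, not taken from the specification: given anchor positions (readers or already-located tags) and measured distances, a least-squares multilateration recovers the tag position. The anchor layout and target below are hypothetical.

```python
import numpy as np

def trilaterate(anchors, dists):
    """Least-squares position estimate from known anchor positions and
    measured distances. Linearizes the range equations by subtracting
    the first equation from the rest."""
    anchors = np.asarray(anchors, dtype=float)
    dists = np.asarray(dists, dtype=float)
    x0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - x0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(x0**2))
    est, *_ = np.linalg.lstsq(A, b, rcond=None)
    return est

# Hypothetical layout: three anchors on the field, target at (3, 4).
anchors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0)]
target = np.array([3.0, 4.0])
dists = [np.linalg.norm(target - np.array(a)) for a in anchors]
est = trilaterate(anchors, dists)   # ≈ [3, 4]
```

Adding anchors over-determines the least-squares system, which mirrors the remark above that more readers/transponders shrink the uncertainty region of the position PDF.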
[047] The identities of the players and the corresponding PDFs are now used in step 760 to characterize the measurements. For example, an image (measurement) of a cluster of players can now be segmented into sub-regions, where each sub-region corresponds to one player in the cluster. This can be accomplished with a probabilistic clustering method using the prior information given by the PDFs.
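A toy sketch of such probabilistic segmentation, under the simplifying assumption that each player's positional PDF is an isotropic Gaussian; the player positions and pixel coordinates are hypothetical.

```python
import numpy as np

def assign_pixels(pixels, means, sigmas):
    """Segment a cluster: each foreground pixel is assigned to the
    player whose positional PDF (isotropic Gaussian here) gives it
    the highest likelihood."""
    labels = []
    for p in np.asarray(pixels, dtype=float):
        lik = [np.exp(-np.sum((p - np.asarray(m))**2) / (2 * s**2)) / s**2
               for m, s in zip(means, sigmas)]
        labels.append(int(np.argmax(lik)))
    return labels

# Two RTLS-derived priors (hypothetical): player 0 near (2, 2), player 1 near (8, 2).
means = [(2.0, 2.0), (8.0, 2.0)]
sigmas = [1.0, 1.0]
pixels = [(1.5, 2.0), (3.0, 1.0), (7.5, 2.5), (9.0, 2.0)]
labels = assign_pixels(pixels, means, sigmas)   # [0, 0, 1, 1]
```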
[048] Then, in step 770, these characterized measurements are associated with the current list of tracked objects 790. Finally, the tracking data of each tracked object (position, speed, identity, etc.) is updated in step 780 using known tracking methods, such as those described in U.S. Patent Application No. 12/403,857 by Gefen.
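As one simple association scheme, purely for illustration (the cited application describes the tracking methods actually used): greedy nearest-neighbor matching of new measurements to the current track list, with a gating distance to reject implausible pairs.

```python
import math

def associate(tracks, measurements, gate=5.0):
    """Greedy nearest-neighbor association of measurements to tracks;
    pairs beyond the gating distance remain unassigned."""
    pairs = sorted(
        (math.dist(t, m), ti, mi)
        for ti, t in enumerate(tracks)
        for mi, m in enumerate(measurements)
    )
    used_t, used_m, assoc = set(), set(), {}
    for d, ti, mi in pairs:
        if d <= gate and ti not in used_t and mi not in used_m:
            assoc[ti] = mi
            used_t.add(ti)
            used_m.add(mi)
    return assoc

tracks = [(0.0, 0.0), (10.0, 10.0)]
measurements = [(9.0, 10.5), (0.5, -0.5)]
assoc = associate(tracks, measurements)   # {0: 1, 1: 0}
```

Real trackers typically combine such association with a motion filter (e.g., Kalman filtering) for the position/speed update.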
[049] Fusion of RTLS-based and visual-based data, as described above, can be achieved through the processing of video frames received from a camera and telemetry received from an RTLS, according to embodiments. Typically, the RTLS data transfer rate is not the same as the camera frame rate. An RTLS that uses a high-frequency signal carrier achieves a high data rate. This, in turn, allows the system to accommodate a large number of transponders and allows each transponder to transmit large data packets; the higher the frequency, the faster the communication between readers and transponders. However, a high-frequency signal carrier attenuates faster, and therefore its range is more limited.
[050] In embodiments where multiple transponders and readers are involved, depending on the specific RTLS technology, the generated RTLS telemetry can lag the corresponding visual-based tracking data. This potential latency can extend over many video frames and require a synchronization mechanism, for example, attaching a time stamp to both the RTLS-based and the visual-based data, which can later be used for synchronization. Moreover, where the RTLS data transmission throughput is lower than the camera's frame rate, telemetry may be available to the object tracking system 710 only every N video frames. In this case, visual-based tracking data interpolates the missing data points where telemetry-derived tracking data is not available. Alternatively, the RTLS 712 can send telemetry to the object tracking system 710 only when commanded to do so by the RTLS control unit 720. In this case, the RTLS-based identification and location data will be requested, for example, only when required to resolve an occlusion.
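A minimal sketch of aligning slower telemetry with faster camera frames via time stamps: linear interpolation of time-stamped RTLS positions at a camera frame time. The sample rates and positions are hypothetical.

```python
def interpolate_position(telemetry, t):
    """Linearly interpolate an RTLS position at camera-frame time `t`
    from time-stamped telemetry samples [(time, (x, y)), ...]."""
    telemetry = sorted(telemetry)
    for (t0, p0), (t1, p1) in zip(telemetry, telemetry[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(p0, p1))
    raise ValueError("time outside telemetry window")

# Hypothetical 10 Hz RTLS updates; a camera frame falls between samples.
telemetry = [(0.0, (0.0, 0.0)), (0.1, (1.0, 0.0)), (0.2, (2.0, 1.0))]
pos = interpolate_position(telemetry, 0.15)   # ≈ (1.5, 0.5)
```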
[051] An exemplary embodiment of the RTLS subsystem 712 is described below, applying ultra-wideband (UWB) communication technology. A UWB carrierless communication signal is defined as a signal with a bandwidth of at least 500 MHz or with a bandwidth of at least 20% of its center frequency. In 2002, the Federal Communications Commission (FCC) approved the transmission of UWB communication signals in a frequency range starting at 3.1 GHz, and at a spectral density below -41.3 dBm/MHz. The UWB signal exhibits excellent performance in a highly reflective (multipath) environment due to its low duty-cycle pulses. In addition, interference with other RF signals is minimal due to the non-overlapping frequency bands and the difference in signal type. Consequently, UWB has become an attractive solution for applications where high data rates and high resolution are required. Current UWB-based RTLS developments feature less than one foot of location accuracy and an update rate of a few milliseconds. However, a LOS to the target is still a requirement in order to allow real-time tracking.
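The two-part UWB bandwidth definition quoted above can be expressed directly; the example band edges below are illustrative.

```python
def is_uwb(f_low_hz, f_high_hz):
    """UWB test per the definition above: absolute bandwidth of at
    least 500 MHz, or fractional bandwidth (bandwidth divided by
    center frequency) of at least 20%."""
    bw = f_high_hz - f_low_hz
    f_center = (f_high_hz + f_low_hz) / 2.0
    return bw >= 500e6 or bw / f_center >= 0.20

wide = is_uwb(3.1e9, 3.7e9)    # True: 600 MHz absolute bandwidth
narrow = is_uwb(2.4e9, 2.5e9)  # False: 100 MHz, ~4% fractional bandwidth
```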
[052] Figure 8 shows a UWB-based RTLS reader component 800, according to an embodiment. The reader can consist of a traditional (carrier-based) RF transmitter/receiver unit 810, a UWB receiver unit 840, and a controller 880. The RF communication unit can receive and transmit modulated signals via an antenna 812. A circulator 814 can direct signals coming from the antenna to a signal amplifier 816. Similarly, signals coming out of the transmitter can be routed to the antenna, while being prevented from passing through the receiver's amplifier. Driven by the controller 880, the transmitter 820 and receiver 822 can carry out communication between the reader and the tags. A main purpose of this communication can be to control the tag operations and possibly to receive sensor data from the tags.
[053] A UWB receiver 840 can receive, via an antenna 842, a UWB signal from which serial data is derived. The UWB signal can first be filtered 844 and then amplified 846. The amplified signal can then be mixed 850 with a template signal generated by the template generator 848. The template signal can be based on the pulse waveform used in the system and designed to extract the pulses from a received UWB signal through correlation, achieved by the mixing 850 and integration 852. The integrator's analog output signal can then be passed to a sample-and-hold circuit 854, where a certain signal level is selected and converted into digital data by the ADC 856. This digital data is translated into digital symbols 858, which are processed by the reader controller 880, where the derivation of telemetry such as TOA and AOA takes place.
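A discrete-time sketch of the correlation receiver described above (mixing with a template and integrating to find the pulse); the template shape and pulse offset are hypothetical, and the noise-free samples are a simplification.

```python
import numpy as np

def correlate_pulse(received, template):
    """Correlation receiver: slide the local template over the received
    samples (mix and integrate at each lag); the peak index estimates
    the pulse delay in samples."""
    out = np.correlate(received, template, mode="valid")
    return int(np.argmax(out))

# Hypothetical short UWB pulse template placed at sample offset 7.
template = np.array([0.0, 1.0, -1.0, 0.0])
received = np.zeros(32)
received[7:11] = template

delay = correlate_pulse(received, template)   # 7
```

In hardware this correlation is done in the analog domain by the mixer 850 and integrator 852, with the sample-and-hold 854 and ADC 856 digitizing the result.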
[054] The reader controller 880 may include a computing module 882, a memory module 881, a clock module 886, and a power source 888, according to an embodiment. The reader controller manages communication between the reader and the tags, and with other readers. It can collect sensor and status data from tags (via its RF communication unit) and tag data (via its UWB receiver). The reader controller 880 can compute the distances and angles between the readers and the tags, as well as the distances between the tags, and pass these measurements on to the RTLS controller 720 for processing.
[055] Figure 9 shows a UWB-based RTLS tag component 900, according to an embodiment. The tag can consist of a traditional (carrier-based) RF transmitter/receiver unit 910, a UWB transmitter unit 930, a UWB backscatter transceiver 950, and a control unit 970. Similar to the reader's, the tag's RF transmitter/receiver 910 may include an antenna 920, a circulator 922, amplifiers 916 and 918, a receiver 912, and a transmitter 914. Through this RF communication unit, the tag's mode of operation can be configured by the reader, and various status and sensor data can be sent from a tag to a reader. The UWB transmitter 930 can send UWB signals when triggered by the controller 970. To do so, the transmitter can receive an interrogation sequence from the controller and convert it into an analog signal using the DAC 932, from which pulses are generated by the pulse generator 934. The UWB signal can then be amplified 936 and filtered 938 before being transmitted through the antenna 940.
[056] In contrast to the UWB transmitter 930, the UWB backscatter transceiver 950 merely responds to a received UWB signal. Isolation between the incoming and the backscattered UWB signals can be provided by a circulator 962. An incoming UWB signal can be reflected back or absorbed by the antenna 960, depending on the antenna's properties. The antenna modulator 952 is designed to configure the antenna, for example, by controlling the antenna's impedance. Thus, information can be encoded by, for example, reflecting, absorbing, and/or changing the polarity of the incoming signal. In addition, the modulator 952 can control the signal amplifier 954. The transceiver 950 may also include an output filter 958 and an input filter 956.
[057] The tag controller 970 is a processing module including a computing module 972, a memory module 974, a clock 976, and a power source 978. Data received by the tag can be analyzed and stored in the controller. The clock module controls the time at which UWB signal transmission 930 occurs and the time at which the antenna modulator changes the antenna impedance and thereby encodes data, such as the tag ID, within the backscattered UWB signal.
[058] As depicted in figure 4, an RTLS system can include at least one reader and a plurality of tags attached to targets of interest. At any time, some tags may have a direct LOS to a reader and some may merely have an indirect LOS to the reader (meaning a LOS that passes through another tag that has a direct LOS to the reader). Hence the system, according to an embodiment, locates both direct-LOS and indirect-LOS tags as follows.
[059] The reader 800 can allocate a time slot to a tag 900. During this time slot, the specific tag is configured to operate in a master operating mode, while all other tags are configured in a slave operating mode, according to an embodiment. When a tag is in the master operating mode, it can be configured to transmit 930 a UWB signal, a sequence of very short pulses. This signal, denoted by S0(t), can be received by the reader's UWB receiver 840 if there is a direct LOS. It can also be received and backscattered by the UWB backscatter transceiver 950 of any slave tag that has a LOS to the master tag. Such a slave tag's backscattered signal, denoted by Si(t) (i denotes a slave tag index), is sent back to the master tag's UWB backscatter transceiver 950, where it is then backscattered to the reader. Note, first, that transmission of a UWB signal by a tag's UWB transmitter 930 occurs only when the tag is configured to operate in the master operating mode; and, second, that all tags (master and slave) backscatter signals, where the master tag is configured to backscatter only the Si(t) signals and all slave tags are configured to backscatter only the S0(t) signal.
[060] Figure 10 demonstrates the progression of UWB signals as transmitted by the master tag 1010 and as received by the reader 1020, according to an embodiment. The tag configured in the master operating mode can periodically send UWB pulses 1012a, 1012b, etc., separated by a "guard time". The guard time can prevent interference between the backscattered signals resulting from successive pulses. At the reader 800, the signal S0(t) is received first: 1024a, 1024b, etc., with a delay time of T0. Then a backscattered signal from a first slave tag, S1(t), can be received: 1026a, 1026b, etc., with a delay time of T1. Similarly, a backscattered signal from a second slave tag, S2(t), can be received: 1028a, 1028b, etc., with a delay time of T2. Note that while T0 represents the time it takes S0(t) to travel from the master tag to the reader, Ti represents the time it takes Si(t) to travel from the master tag to slave tag i and back, through the master tag, to the reader.
[061] These signal travel times (TOAs), together with knowledge of the signal propagation speed, can be used by the reader 800 to calculate the distance between the reader and the master tag and the distances between the master tag and the slave tags, according to an embodiment. The reader 800 can be configured to measure, in addition to the signal travel times, the angle of arrival (AOA) of the master tag's signal and other telemetry that can be instrumental in calculating the tags' locations (RSSI, TDOA, TOF, RTT, etc.).
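The distance arithmetic implied by the delays above can be sketched as follows, assuming radio propagation at the speed of light and the simplified geometry of figure 10 (the delay values are hypothetical): T0 covers the direct master-to-reader path, while Ti adds a round trip from the master to slave i on top of that same direct path.

```python
C = 299_792_458.0  # signal propagation speed (m/s)

def master_reader_distance(t0):
    """S0(t) travels master -> reader directly, so d = C * T0."""
    return C * t0

def master_slave_distance(ti, t0):
    """Si(t) travels master -> slave i -> master -> reader, so the
    extra delay (Ti - T0) is the round trip to the slave, and the
    one-way distance is half of it times C."""
    return C * (ti - t0) / 2.0

t0 = 100e-9   # direct path delay: about 30 m
ti = 140e-9   # slave path adds a 40 ns round trip: about 6 m to the slave

d_reader = master_reader_distance(t0)      # ≈ 29.98 m
d_slave = master_slave_distance(ti, t0)    # ≈ 6.00 m
```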
[062] The features described above for the exemplary embodiments shown in figures 1-10, or any part(s) or function(s) thereof, can be implemented using hardware, software modules, firmware, tangible computer-readable or computer-usable storage media having instructions stored thereon, or a combination thereof, and can be implemented in one or more computer systems or other processing systems. Figure 11 illustrates an exemplary computer system 1100 in which embodiments of the present invention, or parts thereof, can be implemented as computer-readable code. For example, the object tracking system 140, the visual information receiver 160, the RTLS information receiver 170, the object tracker 180, and/or any other components of the exemplary systems shown in figures 1-10 can be implemented in hardware, firmware, or as computer-readable code on a computer system such as computer system 1100. After reading this description, it will become apparent to a person skilled in the relevant art how to implement the invention using other computer systems and/or computer architectures.
[063] Computer system 1100 includes one or more processors, such as processor 1104. The processor 1104 can be a special-purpose or a general-purpose processor. Processor 1104 is connected to a communication infrastructure 1106 (for example, a bus or a network).
[064] Computer system 1100 also includes a main memory 1108, preferably random access memory (RAM), and may also include a secondary memory 1110. Secondary memory 1110 may include, for example, a hard disk drive 1112 and/or a removable storage drive 1114. The removable storage drive 1114 may comprise a floppy disk drive, a magnetic tape drive, an optical disk drive, a flash memory, or the like. The removable storage drive 1114 reads from and/or writes to a removable storage unit 1118 in a well-known manner. The removable storage unit 1118 may comprise a floppy disk, a magnetic tape, an optical disk, etc., which is read by and written to by the removable storage drive 1114. As will be appreciated by persons skilled in the relevant art(s), the removable storage unit 1118 includes a computer-usable storage medium having computer software and/or data stored therein.
[065] In alternative implementations, the secondary memory 1110 may include other similar means for allowing computer programs or other instructions to be loaded into the computer system 1100. Such means may include, for example, a removable storage unit 1122 and an interface 1120. Examples of such means may include a program cartridge and cartridge interface (such as that found in video game devices), a removable memory chip (such as an EPROM or PROM) and an associated socket, and other removable storage units 1122 and interfaces 1120 that allow software and data to be transferred from the removable storage unit 1122 to the computer system 1100.
[066] Computer system 1100 may also include a communications interface 1124. The communications interface 1124 allows software and data to be transferred between computer system 1100 and external devices. The communications interface 1124 may include a modem, a network interface (such as an Ethernet card), a communications port, a PCMCIA slot and card, a wireless card, or the like. Software and data transferred via the communications interface 1124 are in the form of signals, which may be electronic, electromagnetic, optical, or other signals capable of being received by the communications interface 1124. These signals are provided to the communications interface 1124 via a communications path 1126. The communications path 1126 carries the signals and can be implemented using wires or cables, fiber optics, a phone line, a cellular phone link, an RF link, or other communications channels.
[067] In this document, the terms "computer program medium" and "computer-usable medium" are used to refer generally to media such as the removable storage unit 1118, the removable storage unit 1122, a hard disk installed in the hard disk drive 1112, and signals carried over the communications path 1126. Computer program medium and computer-usable medium can also refer to memories, such as the main memory 1108 and the secondary memory 1110, which can be semiconductor memories (e.g., DRAMs, etc.). These computer program products are means for providing software to computer system 1100. Computer programs (also called computer control logic) are stored in the main memory 1108 and/or the secondary memory 1110. Computer programs may also be received via the communications interface 1124. Such computer programs, when executed, enable the computer system 1100 to implement the present invention as discussed herein. In particular, the computer programs, when executed, enable the processor 1104 to implement the processes of the present invention, such as the steps in the methods described above. Accordingly, such computer programs represent controllers of the computer system 1100. Where the invention is implemented using software, the software can be stored in a computer program product and loaded into the computer system 1100 using the removable storage drive 1114, the interface 1120, the hard disk drive 1112, or the communications interface 1124. Embodiments of the invention may also be directed to computer program products comprising software stored on any computer-usable medium. Such software, when executed on one or more data processing devices, causes the data processing device(s) to operate as described herein. Embodiments of the invention may employ any computer-usable or computer-readable medium, known now or in the future.
Examples of computer-usable media include, but are not limited to, primary storage devices (e.g., any type of random access memory), secondary storage devices (e.g., hard drives, floppy disks, CD-ROMs, ZIP disks, tapes, magnetic storage devices, optical storage devices, MEMS, nanotechnology storage devices, etc.), and communication media (e.g., wired and wireless communications networks, local area networks, wide area networks, intranets, etc.).
[068] The present invention has been described above with the help of functional building blocks illustrating the implementation of specified functions and their relationships. The boundaries of these functional building blocks have been arbitrarily defined here for the convenience of the description. Alternative boundaries can be defined so long as the specified functions and relationships are performed properly.
[069] The foregoing description of the specific embodiments will so fully reveal the general nature of the invention that others can, by applying knowledge within the skill of the art, readily modify and/or adapt such specific embodiments for various applications, without undue experimentation, without departing from the general concept of the present invention. Therefore, such adaptations and modifications are intended to be within the meaning and range of equivalents of the disclosed embodiments, based on the teachings and guidance presented here. It is to be understood that the phraseology or terminology herein is for the purpose of description and not limitation, such that the terminology or phraseology of the present specification is to be interpreted by skilled artisans in light of the teachings and guidance.
[070] The breadth and scope of the present invention should not be limited by any of the above-described exemplary embodiments, but should be defined only in accordance with the following claims and their equivalents.
Claims (16)
[0001]
1. A computer-implemented method for tracking objects in a scene, characterized by comprising: receiving visual-based information from the scene with a visual-based tracking system; receiving telemetry-based information from the scene with a real-time location system (RTLS) based tracking system; and, when neither the visual-based tracking system nor the RTLS-based tracking system is able to independently identify a location and an identity of a first object in the scene, determining the location and identity of the first object in the scene by fusion of the visual-based information and the telemetry-based information, independently of a past trajectory of the first object, wherein the RTLS-based tracking system predicts the location of the first object to lie within at least one spatial ring, and wherein the at least one spatial ring is centered around at least one RTLS-based reader or at least one transponder.
[0002]
2. Method according to claim 1, characterized in that it further comprises: determining when no direct line of sight (LOS) is available between the first object and a camera of the visual-based tracking system; and determining which telemetry-based information from the RTLS-based tracking system to use to locate and identify the first object.
[0003]
3. Method according to claim 1, characterized in that it further comprises: determining when no direct line of sight (LOS) is available between an RTLS tag located on the first object and a reader of the RTLS-based tracking system; and determining which visual-based information to use to locate and identify the first object.
[0004]
4. Method according to claim 1, characterized in that it further comprises using the RTLS-based tracking system to determine a telemetry-based measurement between the first object and a second object.
[0005]
5. Method according to claim 4, characterized in that it further comprises: transmitting a first signal from a first RTLS tag located on the first object to a second RTLS tag located on the second object and to a reader of the RTLS-based tracking system; and transmitting a second signal from the second RTLS tag located on the second object, through the first RTLS tag, to the reader of the RTLS-based tracking system.
[0006]
6. Method according to claim 4, characterized in that it further comprises locating and identifying the first and the second object when the first and the second object are located on different parts of a larger object.
[0007]
7. Method according to claim 1, characterized in that it further comprises: segmenting out an image measurement of the first object based on the location and identity of the first object; characterizing the first object based on the image measurement of the first object; and associating the characterization data of the first object with the tracking information corresponding to the first object.
[0008]
8. System for tracking objects in a scene, characterized in that it comprises: a visual-based information receiver configured to receive visual-based information from the scene with a visual-based tracking system; a telemetry-based information receiver configured to receive telemetry-based information from the scene with an RTLS-based tracking system; and an object tracker, implemented on a processor-based system, configured to locate and identify a first object in the scene by fusion of the visual-based information and the telemetry-based information when neither the visual-based tracking system nor the RTLS-based tracking system is able to independently identify a location and an identity of the first object in the scene, wherein the fusion is independent of a past trajectory of the first object.
[0009]
9. System according to claim 8, characterized in that the object tracker is configured to use the telemetry-based information to determine a measurement between the first object and a second object.
[0010]
10. System according to claim 9, characterized in that the telemetry-based information receiver is configured to receive a first signal from an RTLS tag located on the first object and a second signal from a second RTLS tag located on the second object, where the second signal is transmitted via the first RTLS tag.
[0011]
11. System according to claim 8, characterized in that the object tracker is configured to use the telemetry-based information only when necessary to resolve the location and identity of the first object, when one or more lines of sight (LOS) between the first object and a camera of the visual-based tracking system are blocked.
[0012]
12. System according to claim 8, characterized in that the object tracker is configured to use the visual-based information only when necessary to resolve the location and identity of the first object, when one or more lines of sight (LOS) between an RTLS tag on the first object and a reader of the telemetry-based tracking system are blocked.
[0013]
13. System according to claim 8, characterized in that the object tracker is configured to: segment out an image measurement of the first object based on the location and identity of the first object; characterize the first object based on the image measurement of the first object; and associate the characterization data of the first object with tracking information corresponding to the first object.
[0014]
14. System for tracking objects in a scene, characterized in that it comprises: a tracking system configured to detect the location of a first object; a real-time location system (RTLS) based tracking system configured to: determine a first measurement between the first object and a reader of the RTLS-based tracking system; and determine a second measurement between the first object and a second object; and an object tracker configured to determine the location and identity of the second object based on the detected location of the first object and the determined first and second measurements.
[0015]
15. System according to claim 14, characterized in that the RTLS-based tracking system comprises a first RTLS tag located on the first object, configured to transmit a first signal, having information associated with the first object, to a second RTLS tag located on the second object and to the reader of the RTLS-based tracking system.
[0016]
16. System according to claim 14, characterized in that the RTLS-based tracking system comprises a second RTLS tag located on the second object, configured to transmit a second signal, having information associated with the second object, through the first RTLS tag located on the first object to the reader of the RTLS-based tracking system.
WO2017167708A1|2016-04-01|2017-10-05|Koninklijke Philips N.V.|Monitoring compliance with medical protocols based on occlusion of line of sight|
US10147218B2|2016-09-29|2018-12-04|Sony Interactive Entertainment America, LLC|System to identify and use markers for motion capture|
RU2653322C1|2016-12-21|2018-05-07|ООО "Ай Ти Ви групп"|Method of displaying objects in sequence of images received from stationary video camera|
US20190340769A1|2017-01-20|2019-11-07|Sony Corporation|Information processing apparatus, information processing method, and information processing system|
WO2018222532A1|2017-06-01|2018-12-06|Vid Scale, Inc.|Rfid based zoom lens tracking of objects of interest|
US10922871B2|2018-01-19|2021-02-16|Bamtech, Llc|Casting a ray projection from a perspective view|
KR20190094954A|2018-02-06|2019-08-14|삼성전자주식회사|Apparatus and method for tracking a movement of eletronic device|
EP3557559A1|2018-04-20|2019-10-23|TMRW Foundation IP & Holding S.A.R.L.|Sports events broadcasting systems and methods|
US10776672B2|2018-04-25|2020-09-15|Avigilon Corporation|Sensor fusion for monitoring an object-of-interest in a region|
ES2716814A1|2019-01-17|2019-06-17|Pigchamp Pro Europa S L|Integrated system of control, prediction and eradication of diseases in farms of animal production, through the combination of devices for movement control, environmental control and monitoring of animal health, and its method of application and assessment of biological risk |
US20200330830A1|2019-04-17|2020-10-22|PFC Shared Services, LLC|Golf Ball Tracking System|
WO2021026758A1|2019-08-13|2021-02-18|Intel Corporation|Technology to automatically locate ball-holding players in multi-camera video feeds|
CN112468735B|2021-01-26|2021-05-11|北京深蓝长盛科技有限公司|Video processing system and video processing method|
CN112437233B|2021-01-26|2021-04-16|北京深蓝长盛科技有限公司|Video generation method, video processing device and camera equipment|
CN113793365A|2021-11-17|2021-12-14|第六镜科技有限公司|Target tracking method and device, computer equipment and readable storage medium|
Legal status:
2019-01-08| B06F| Objections, documents and/or translations needed after an examination request according to [chapter 6.6 patent gazette]|
2019-12-17| B15K| Others concerning applications: alteration of classification|Free format text: THE PREVIOUS CLASSIFICATIONS WERE: G01S 5/02, G01S 3/786 Ipc: G01S 3/786 (1990.01), G01S 5/02 (1968.09)|
2019-12-17| B06U| Preliminary requirement: requests with searches performed by other patent offices: procedure suspended [chapter 6.21 patent gazette]|
2021-02-09| B09A| Decision: intention to grant|
2021-03-30| B16A| Patent or certificate of addition of invention granted|Free format text: TERM OF VALIDITY: 10 (TEN) YEARS COUNTED FROM 30/03/2021, SUBJECT TO THE APPLICABLE LEGAL CONDITIONS.|
Priority:
Application number | Filing date | Patent title
US28509909P| true| 2009-12-09|2009-12-09|
US61/285,099|2009-12-09|
US12/833,541|US8731239B2|2009-12-09|2010-07-09|Systems and methods for tracking objects under occlusion|
US12/833,541|2010-07-09|
PCT/US2010/059685|WO2011072123A2|2009-12-09|2010-12-09|System and method for tracking objects under occlusion|